
    Shared Frontend for Manycore Server Processors

    Instruction-supply mechanisms, namely branch predictors and instruction prefetchers, exploit recurring control flow in an application to predict the application's future control flow and provide the core with a useful instruction stream to execute in a timely manner. Consequently, instruction-supply mechanisms aggressively incorporate control-flow condition, target, and instruction cache access information (i.e., control-flow metadata) to improve performance. Despite their high accuracy, and hence performance benefits, these predictors incur major silicon provisioning due to their metadata storage overhead. The storage overhead is further aggravated by increasing core counts and more complex software stacks, leading to major metadata redundancy: (i) across cores, as the metadata of cores running a given server workload significantly overlap, and (ii) within a core, as the control-flow metadata maintained by disparate instruction-supply mechanisms overlap significantly. In this thesis, we identify the sources of redundancy in the instruction-supply metadata and provide mechanisms to share metadata across cores and unify metadata for disparate instruction-supply mechanisms. First, homogeneous server workloads running on many cores allow for metadata sharing across cores, as each core executes the same types of requests and exhibits the same control flow. Second, the control-flow metadata maintained by individual instruction-supply mechanisms, despite being at different granularities (i.e., instruction vs. instruction block), overlap significantly, allowing for unifying their metadata. Building on these two observations, we eliminate the storage overhead stemming from metadata redundancy in manycore server processors through a specialized shared frontend, which enables sharing metadata across cores and unifying metadata within a core without sacrificing the performance benefits provided by private and disparate instruction-supply mechanisms.
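    The cross-core sharing idea can be made concrete with a small sketch. The following C fragment is purely illustrative; all names and sizes (HistoryEntry, HISTORY_LEN, the cursor scheme) are assumptions, not the thesis's actual design. A single block-granularity control-flow history table serves all cores, and each core retains only a cheap private cursor into it.

        /* Sketch only: one shared control-flow history table replaces
         * per-core private copies whose contents would largely overlap
         * under a homogeneous server workload. */
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_CORES   16
        #define HISTORY_LEN 4096    /* shared control-flow history entries */

        typedef struct {
            uint64_t block_addr;    /* instruction-block address */
            uint8_t  branch_map;    /* which slots in the block are branches */
        } HistoryEntry;

        static HistoryEntry shared_history[HISTORY_LEN];

        /* Each core keeps only a cheap private cursor into the shared stream. */
        static uint32_t core_cursor[NUM_CORES];

        static const HistoryEntry *next_prefetch_target(unsigned core)
        {
            return &shared_history[core_cursor[core]++ % HISTORY_LEN];
        }

        int main(void)
        {
            size_t private_cost = (size_t)NUM_CORES * sizeof shared_history;
            size_t shared_cost  = sizeof shared_history + sizeof core_cursor;
            printf("private copies: %zu bytes, shared: %zu bytes\n",
                   private_cost, shared_cost);
            (void)next_prefetch_target(0);
            return 0;
        }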

    Confluence: Unified Instruction Supply for Scale-out Servers

    Multi-megabyte instruction working sets of server workloads defy the capacities of the latency-critical instruction-supply components of a core: the instruction cache (L1-I) and the branch target buffer (BTB). Recent work has proposed dedicated prefetching techniques aimed separately at the L1-I and the BTB, resulting in high metadata costs and/or only modest performance improvements due to the complex control-flow histories required to effectively fill the two components ahead of the core's fetch stream. This work makes the observation that the prefetchers for both the L1-I and the BTB require essentially identical metadata: the control-flow history. While the L1-I prefetcher necessitates the history at block granularity, the BTB requires knowledge of individual branches inside each block. To eliminate redundant metadata and multiple prefetchers, we introduce Confluence -- a frontend design with unified metadata for prefetching into both the L1-I and the BTB, whose contents are synchronized. Confluence leverages a stream-based prefetcher to proactively fill both components ahead of the core's fetch stream. The prefetcher maintains the control-flow history at block granularity and, for each instruction block brought into the L1-I, eagerly inserts the set of branch targets contained in the block into the BTB. Confluence provides 85% of the performance improvement of an ideal frontend (with a perfect L1-I and BTB) at a 1% area overhead per core, while the highest-performance alternative delivers only 62% of the ideal performance improvement at a per-core area overhead of 8%.
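    As a rough illustration of the eager-insertion flow described above, the C sketch below uses invented structures (BlockMetadata, BLOCK_INSNS, stub fill routines), not Confluence's actual interfaces: one block-granularity metadata record drives both the L1-I fill and the insertion of every branch target recorded for that block into the BTB, keeping the two synchronized.

        #include <stdint.h>
        #include <stdio.h>

        #define BLOCK_INSNS 16  /* instructions per block; illustrative */

        typedef struct {
            uint64_t block_addr;
            uint16_t branch_map;           /* bit i set: instruction i is a branch */
            uint64_t target[BLOCK_INSNS];  /* branch target per instruction slot */
        } BlockMetadata;

        static void l1i_fill(uint64_t block_addr)  /* stub for the L1-I fill */
        {
            printf("L1-I fill: block 0x%llx\n", (unsigned long long)block_addr);
        }

        static void btb_insert(uint64_t pc, uint64_t target)  /* stub for BTB insert */
        {
            printf("BTB insert: pc 0x%llx -> 0x%llx\n",
                   (unsigned long long)pc, (unsigned long long)target);
        }

        /* One step of the stream prefetcher: filling an L1-I block eagerly
         * installs every branch target recorded for that block into the BTB,
         * so both structures are fed from a single metadata record. */
        static void prefetch_block(const BlockMetadata *md)
        {
            l1i_fill(md->block_addr);
            for (unsigned i = 0; i < BLOCK_INSNS; i++)
                if (md->branch_map & (1u << i))
                    btb_insert(md->block_addr + 4u * i, md->target[i]);
        }

        int main(void)
        {
            BlockMetadata md = { .block_addr = 0x40000, .branch_map = 0x0011 };
            md.target[0] = 0x40400;   /* branches at slots 0 and 4 */
            md.target[4] = 0x40800;
            prefetch_block(&md);
            return 0;
        }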

    Unison Cache: A Scalable and Effective Die-Stacked DRAM Cache

    Recent research advocates large die-stacked DRAM caches in manycore servers to break the memory latency and bandwidth wall. To realize their full potential, die-stacked DRAM caches necessitate low lookup latencies, high hit rates and the efficient use of off-chip bandwidth. Today's stacked DRAM cache designs fall into two categories based on the granularity at which they manage data: block-based and page-based. The state-of-the-art block-based design, called Alloy Cache, colocates a tag with each data block (e.g., 64B) in the stacked DRAM to provide fast access to data in a single DRAM access. However, such a design suffers from low hit rates due to poor temporal locality in the DRAM cache. In contrast, the state-of-the-art page-based design, called Footprint Cache, organizes the DRAM cache at page granularity (e.g., 4KB), but fetches only the blocks that will likely be touched within a page. In doing so, the Footprint Cache achieves high hit rates with moderate on-chip tag storage and reasonable lookup latency. However, multi-gigabyte stacked DRAM caches will soon be practical and needed by server applications, thereby mandating tens of MBs of tag storage even for page-based DRAM caches. We introduce a novel stacked-DRAM cache design, Unison Cache. Similar to Alloy Cache's approach, Unison Cache incorporates the tag metadata directly into the stacked DRAM to enable scalability to arbitrary stacked-DRAM capacities. Then, leveraging the insights from the Footprint Cache design, Unison Cache employs large, page-sized cache allocation units to achieve high hit rates and a reduction in tag overheads, while predicting and fetching only the useful blocks within each page to minimize the off-chip traffic. Our evaluation using server workloads and caches of up to 8GB reveals that Unison Cache improves performance by 14% compared to Alloy Cache due to its high hit rate, while outperforming state-of-the-art page-based designs that require impractical SRAM-based tags of around 50MB.
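    A hypothetical sketch of a Unison-style lookup follows; the field names, the direct-mapped geometry, and the miss handling are simplifications (the real design is set-associative and consults a footprint predictor). The point it illustrates is that the tag and the per-block footprint travel with the page in the stacked DRAM, so a single DRAM access resolves hit/miss and, on a miss, yields the set of blocks worth fetching off-chip.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define PAGE_SIZE  4096u
        #define BLOCK_SIZE 64u
        #define NUM_SETS   1024u               /* illustrative geometry */

        typedef struct {
            bool     valid;
            uint64_t tag;        /* page tag, stored in the stacked DRAM itself */
            uint64_t footprint;  /* bit b set: 64B block b present/predicted */
        } PageEntry;

        static PageEntry dram_cache[NUM_SETS];  /* direct-mapped for simplicity */

        /* Returns true on hit; on a miss, *fetch_mask names the blocks to
         * bring in from off-chip memory. */
        static bool unison_lookup(uint64_t paddr, uint64_t *fetch_mask)
        {
            uint64_t page = paddr / PAGE_SIZE;
            PageEntry *e = &dram_cache[page % NUM_SETS];  /* one DRAM row read */
            if (e->valid && e->tag == page) {
                *fetch_mask = 0;      /* hit: data served in a single access */
                return true;
            }
            /* Miss: allocate the page. The real design consults a footprint
             * predictor here; this sketch fetches only the requested block. */
            unsigned blk = (unsigned)(paddr % PAGE_SIZE / BLOCK_SIZE);
            *fetch_mask  = 1ull << blk;
            e->valid = true; e->tag = page; e->footprint = *fetch_mask;
            return false;
        }

        int main(void)
        {
            uint64_t mask;
            bool hit = unison_lookup(0x12345040ull, &mask);
            printf("%s, fetch mask 0x%llx\n", hit ? "hit" : "miss",
                   (unsigned long long)mask);
            return 0;
        }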

    From A to E: Analyzing TPC’s OLTP Benchmarks -- The obsolete, the ubiquitous, the unexplored

    Introduced in 2007, TPC-E is the most recently standardized OLTP benchmark by TPC. Even though TPC-E has already been around for six years, it has not gained the popularity of its predecessor TPC-C: all the published results for TPC-E use a single database vendor’s product. TPC-E is significantly different from its predecessors. Some of its distinguishing characteristics are non-uniform input creation, longer-running and more complicated transactions, and more difficult partitioning. These factors slow down the adoption of TPC-E. In turn, there is little knowledge in the community about how TPC-E behaves micro-architecturally and within the database engine. To shed light on TPC-E, we implement it on top of a scalable open-source database engine, Shore-MT, and perform a workload characterization study, comparing it with the previous, much better known OLTP benchmarks of TPC: TPC-B and TPC-C. In parallel, we study the evolution of the OLTP benchmarks throughout the decades. Our results demonstrate that TPC-E exhibits micro-architectural behavior similar to TPC-B and TPC-C, even though it incurs less stall time and achieves higher instructions per cycle. On the other hand, within the database engine it suffers more from logical lock contention. Therefore, we argue that, on the hardware side, TPC-E needs less aggressive processors, whereas on the software side it can benefit from designs based on intra-transaction parallelism, logical partitioning, and optimistic concurrency control to minimize the effects of lock contention without introducing distributed transactions.
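    To make the last point concrete, the sketch below shows optimistic concurrency control in its simplest form; it is a generic, single-threaded illustration with invented names (Txn, txn_validate), not code from Shore-MT or the paper. A transaction records the versions of the records it reads and validates them at commit time, rather than holding logical locks while it runs.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define MAX_READS 8

        typedef struct { uint64_t version; int value; } Record;

        typedef struct {
            Record  *read_set[MAX_READS];     /* records this txn read */
            uint64_t read_version[MAX_READS]; /* versions seen at read time */
            int      nreads;
        } Txn;

        static int txn_read(Txn *t, Record *r)
        {
            t->read_set[t->nreads] = r;
            t->read_version[t->nreads++] = r->version;  /* remember what we saw */
            return r->value;
        }

        /* Commit succeeds only if nothing we read changed underneath us. */
        static bool txn_validate(const Txn *t)
        {
            for (int i = 0; i < t->nreads; i++)
                if (t->read_set[i]->version != t->read_version[i])
                    return false;   /* conflict: abort and retry */
            return true;
        }

        int main(void)
        {
            Record r = { .version = 1, .value = 42 };
            Txn t = { .nreads = 0 };
            int v = txn_read(&t, &r);
            r.version++;   /* a concurrent writer updates the record */
            printf("read %d, commit %s\n", v, txn_validate(&t) ? "ok" : "aborts");
            return 0;
        }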

    Clearing the Clouds: A Study of Emerging Scale-out Workloads on Modern Hardware

    Emerging scale-out workloads require extensive amounts of computational resources. However, data centers using modern server hardware face physical constraints in space and power, limiting further expansion and calling for improvements in the computational density per server and in the per-operation energy. Continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing server efficiency to ensure that server hardware closely matches the needs of scale-out workloads. We use performance counters on modern servers to study a wide range of scale-out workloads, finding that today’s predominant processor micro-architecture is inefficient for running these workloads. We find that inefficiency comes from the mismatch between the workload needs and modern processors, particularly in the organization of instruction and data memory systems and the processor core micro-architecture. Moreover, while today’s predominant micro-architecture is inefficient when executing scale-out workloads, we find that continuing the current trends will further exacerbate the inefficiency in the future. In this work, we identify the key micro-architectural needs of scale-out workloads, calling for a change in the trajectory of server processors that would lead to improved computational density and power efficiency in data centers.

    A Case for Specialized Processors for Scale-Out Workloads

    Emerging scale-out workloads need extensive amounts of computational resources. However, datacenters using modern server hardware face physical constraints in space and power, limiting further expansion and requiring improvements in the computational density per server and in the per-operation energy. Continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing server efficiency. In this work, we demonstrate that modern server processors are highly inefficient for running cloud workloads. To address this problem, we investigate the microarchitectural behavior of scale-out workloads and present opportunities to enable specialized processor designs that closely match the needs of the cloud.

    Clearing the Clouds: A Study of Emerging Workloads on Modern Hardware

    Emerging scale-out cloud applications need extensive amounts of computational resources. However, data centers using modern server hardware face physical constraints in space and power, limiting further expansion and calling for improvements in the computational density per server and in the per-operation energy use. Therefore, continuing to improve the computational resources of the cloud while staying within physical constraints mandates optimizing server efficiency to ensure that server hardware closely matches the needs of scale-out cloud applications. We use performance counters on modern servers to study a wide range of cloud applications, finding that today’s predominant processor architecture is inefficient for running these workloads. We find that inefficiency comes from the mismatch between the application needs and modern processors, particularly in the organization of instruction and data memory systems and the processor core architecture. Moreover, while today’s predominant architectures are inefficient when executing scale-out cloud applications, we find that the current hardware trends further exacerbate the mismatch. In this work, we identify the key micro-architectural needs of cloud applications, calling for a change in the trajectory of server processors that would lead to improved computational density and power efficiency in data centers.

    Reducing the Energy Dissipation of the Issue Queue by Exploiting Narrow Immediate Operands

    In contemporary superscalar microprocessors, the issue queue is a considerable energy-dissipating component due to its complex scheduling logic. In addition to the energy dissipated by scheduling activities, the read and write lines of the issue queue entries are also highly energy-consuming. When these lines are used for reading and writing unnecessary information bits, such as the immediate operand part of an instruction that does not use the immediate field, or the insignificant higher-order bits of an immediate operand that are in fact not needed, a significant amount of energy is wasted. In this paper, we propose two techniques to reduce the energy dissipation of the issue queue by exploiting the immediate operands of the stored instructions: first, by storing immediate operands in separate immediate operand files rather than inside the issue queue entries, and second, by partitioning the issue queue based on the widths of the immediate operands of instructions. We present performance results and energy savings obtained using a cycle-accurate simulator, testing the design with SPEC2K benchmarks and 90 nm CMOS (UMC) technology.
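    A minimal sketch of the first technique follows, with invented names and sizes (IQEntry, IMM_FILE_SIZE): issue queue entries carry only a short index into a separate immediate operand file, so entries without an immediate never store or toggle full-width immediate bits. The second technique would further split the file or the queue into narrow and wide partitions based on operand width.

        #include <stdint.h>
        #include <stdio.h>

        #define IMM_FILE_SIZE 32
        #define NO_IMM 0xFF          /* sentinel: entry carries no immediate */

        static int32_t  imm_file[IMM_FILE_SIZE]; /* immediates live here ...    */
        static unsigned imm_next;

        typedef struct {
            uint16_t opcode;
            uint8_t  src1, src2, dst;
            uint8_t  imm_idx;  /* ... the queue entry keeps only a short index */
        } IQEntry;

        static IQEntry iq_insert(uint16_t opcode, int32_t imm, int has_imm)
        {
            IQEntry e = { .opcode = opcode, .imm_idx = NO_IMM };
            if (has_imm) {
                e.imm_idx = (uint8_t)(imm_next++ % IMM_FILE_SIZE);
                imm_file[e.imm_idx] = imm;  /* full-width bits stored once */
            }
            return e;
        }

        int main(void)
        {
            IQEntry add  = iq_insert(0x01, 0, 0); /* no immediate: no file slot */
            IQEntry addi = iq_insert(0x02, 7, 1); /* narrow immediate, stored once */
            printf("add imm_idx=%u, addi imm=%d\n",
                   add.imm_idx, imm_file[addi.imm_idx]);
            return 0;
        }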

    Unified prefetching into instruction cache and branch target buffer

    A system and method for coupling the Branch Target Buffer (BTB) content of a BTB with the instruction cache content of an instruction cache. The method includes: tagging a plurality of target buffer entries that belong to branches within the same instruction block with the corresponding instruction block address and a branch bitmap that indicates the individual branches in the block; coupling an overflow buffer with the BTB to accommodate further target buffer entries of instruction blocks, distinct from the plurality of target buffer entries, that have more branches than the corresponding bundle in the BTB is configured to accommodate; and predicting the instructions or instruction blocks that are likely to be fetched by the core in the future, and proactively fetching those instructions from the lower levels of the memory hierarchy by means of a prefetcher.
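    The claimed organization can be sketched as follows; the sizes and the slot-assignment policy (SLOTS_PER_BUNDLE, the insertion-order mapping of targets to slots) are assumptions for illustration only. Each BTB bundle covers one instruction block, tags it with the block address and a branch bitmap, and spills branches beyond the bundle's capacity into a small overflow buffer.

        #include <stdint.h>
        #include <stdio.h>

        #define SLOTS_PER_BUNDLE 4
        #define OVERFLOW_SIZE    64

        typedef struct {
            uint64_t block_addr;               /* tag: address of the block */
            uint16_t branch_map;               /* bit i set: insn i is a branch */
            uint64_t target[SLOTS_PER_BUNDLE]; /* targets for the first branches */
        } Bundle;

        typedef struct { uint64_t branch_pc, target; } OverflowEntry;
        static OverflowEntry overflow[OVERFLOW_SIZE];
        static int overflow_n;

        static int popcount16(uint16_t x)      /* branches already recorded */
        {
            int n = 0;
            while (x) { n += x & 1; x >>= 1; }
            return n;
        }

        /* Record one branch of a block in its bundle, spilling when full. */
        static void record_branch(Bundle *b, unsigned insn_idx, uint64_t target)
        {
            int used = popcount16(b->branch_map);
            b->branch_map |= 1u << insn_idx;
            if (used < SLOTS_PER_BUNDLE)
                b->target[used] = target;
            else if (overflow_n < OVERFLOW_SIZE)
                overflow[overflow_n++] = (OverflowEntry){
                    b->block_addr + 4u * insn_idx, target };
        }

        int main(void)
        {
            Bundle b = { .block_addr = 0x40000 };
            for (unsigned i = 0; i < 6; i++)   /* 6 branches, 4 bundle slots */
                record_branch(&b, i, 0x50000 + 0x40u * i);
            printf("bitmap 0x%x, spilled %d\n", b.branch_map, overflow_n);
            return 0;
        }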